9 research outputs found

    Crowdsourcing the creation of image segmentation algorithms for connectomics

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images and were scored on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best score so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and by the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
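    The scoring described above compares a submitted 2D segmentation against consensus expert labels. As an illustration only (not the official challenge metric), the sketch below computes a Rand-index-style agreement between two integer label images with NumPy; the function name and the assumption that both inputs are label maps of equal shape are ours.

        import numpy as np

        def rand_index(seg_a, seg_b):
            # Rand index: fraction of pixel pairs on which two segmentations agree
            # (same segment in both, or different segments in both).
            a = seg_a.ravel().astype(np.int64)
            b = seg_b.ravel().astype(np.int64)
            n = a.size
            # Contingency counts n_ij via a joint label key.
            keys = a * (b.max() + 1) + b
            n_ij = np.bincount(keys).astype(np.float64)
            a_i = np.bincount(a).astype(np.float64)
            b_j = np.bincount(b).astype(np.float64)
            pairs = n * (n - 1) / 2.0
            same_both = ((n_ij ** 2).sum() - n) / 2.0
            diff_both = pairs - ((a_i ** 2).sum() - n) / 2.0 \
                              - ((b_j ** 2).sum() - n) / 2.0 + same_both
            return (same_both + diff_both) / pairs

    A score of 1.0 means the two segmentations group every pixel pair identically; boundary-width variations of the kind discussed in the abstract change such pairwise counts, which is why robustness of the scoring matters.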

    Assessment of algorithms for mitosis detection in breast cancer histopathology images

    The proliferative activity of breast tumors, routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole-slide images in pathology labs, automatic image analysis has been proposed as a potential solution to these issues. In this paper, the results of the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and evaluation results for eleven methods are presented. The top-performing method has an error rate comparable to the inter-observer agreement among pathologists.
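    Detection challenges of this kind are typically evaluated by matching predicted mitosis locations to annotated ground-truth centroids within a fixed distance and reporting precision, recall and F1. The sketch below is a simplified, assumed version of such an evaluation (greedy one-to-one matching, a hypothetical radius threshold), not the exact AMIDA13 protocol.

        import numpy as np

        def detection_scores(pred_xy, gt_xy, radius=30.0):
            # Greedily match each predicted centroid to the nearest unmatched
            # ground-truth centroid within `radius` pixels (assumed threshold).
            pred = np.asarray(pred_xy, dtype=float).reshape(-1, 2)
            gt = np.asarray(gt_xy, dtype=float).reshape(-1, 2)
            available = np.ones(len(gt), dtype=bool)
            tp = 0
            for p in pred:
                if not available.any():
                    break
                d = np.linalg.norm(gt - p, axis=1)
                d[~available] = np.inf
                j = int(np.argmin(d))
                if d[j] <= radius:
                    available[j] = False
                    tp += 1
            fp = len(pred) - tp          # unmatched predictions
            fn = len(gt) - tp            # missed ground-truth mitoses
            precision = tp / max(tp + fp, 1)
            recall = tp / max(tp + fn, 1)
            f1 = 2 * precision * recall / max(precision + recall, 1e-12)
            return precision, recall, f1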

    Multi-task learning of a deep K-nearest neighbour network for histopathological image classification and retrieval

    Deep neural networks have achieved tremendous success in image recognition, classification and object detection. However, deep learning is often criticised for its lack of transparency and general inability to rationalise its predictions. The issue of poor model interpretability becomes critical in medical applications: a model that is not understood and trusted by physicians is unlikely to be used in daily clinical practice. In this work, we develop a novel multi-task deep learning framework for simultaneous histopathology image classification and retrieval, leveraging the classic concept of k-nearest neighbours to improve model interpretability. For a test image, we retrieve the most similar images from our training database. These retrieved nearest neighbours can be used to classify the test image with a confidence score, and provide a human-interpretable explanation of the classification. Our framework can be built on top of any existing classification network (and therefore benefit from pretrained models) by (i) combining a triplet loss function with a novel triplet sampling strategy to compare distances between samples and (ii) adding a Cauchy hashing loss function to accelerate neighbour searching. We evaluate our method on colorectal cancer histology slides and show that the confidence estimates are strongly correlated with model performance. Nearest neighbours are intuitive and useful for expert evaluation. They give insights into possible model failures, and can support clinical decision making by comparing archived images and patient records with the actual case.
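    As a rough illustration of the retrieval-based classification idea (not the authors' actual implementation), the sketch below classifies a test embedding by majority vote over its k nearest training embeddings and reports the vote fraction as a confidence score; the variable names and the use of plain Euclidean distance over precomputed embeddings are assumptions.

        import numpy as np

        def knn_classify_with_confidence(test_emb, train_embs, train_labels, k=5):
            # Distances from the test embedding to every training embedding.
            d = np.linalg.norm(train_embs - test_emb, axis=1)
            nn_idx = np.argsort(d)[:k]                    # k nearest neighbours
            nn_labels = np.asarray(train_labels)[nn_idx]
            classes, counts = np.unique(nn_labels, return_counts=True)
            pred = classes[np.argmax(counts)]             # majority-vote class
            confidence = counts.max() / k                 # fraction of neighbours agreeing
            return pred, confidence, nn_idx               # nn_idx: images to show the expert

    In the paper, the embedding space is additionally shaped by a triplet loss and a Cauchy hashing loss so that such neighbour searches are both discriminative and fast; those training components are not sketched here.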

    A machine learning approach to visual perception of forest trails for mobile robots

    We study the problem of perceiving forest or mountain trails from a single monocular image acquired from the viewpoint of a robot traveling on the trail itself. Previous literature focused on trail segmentation and used low-level features such as image saliency or appearance contrast; we propose a different approach based on a deep neural network used as a supervised image classifier. By operating on the whole image at once, our system outputs the main direction of the trail relative to the viewing direction. Qualitative and quantitative results computed on a large real-world dataset (which we provide for download) show that our approach outperforms alternatives and yields an accuracy comparable to that of humans tested on the same image classification task. Preliminary results on using this information for quadrotor control on unseen trails are reported. To the best of our knowledge, this is the first letter that describes an approach to perceiving forest trails which is demonstrated on a quadrotor micro aerial vehicle.
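    A minimal sketch of the kind of whole-image direction classifier described above, assuming PyTorch and assuming the trail direction is discretised into three classes (e.g. turn left, go straight, turn right); the architecture, input size and class mapping are illustrative, not the network from the paper.

        import torch
        import torch.nn as nn

        class TrailDirectionNet(nn.Module):
            """Small CNN mapping a whole RGB image to one of three trail directions."""
            def __init__(self, num_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),      # global pooling -> fixed-size feature
                )
                self.classifier = nn.Linear(64, num_classes)

            def forward(self, x):                 # x: (batch, 3, H, W)
                f = self.features(x).flatten(1)
                return self.classifier(f)         # logits over the direction classes

        # Example: predict the direction for one image (input size is arbitrary here).
        net = TrailDirectionNet()
        logits = net(torch.randn(1, 3, 101, 101))
        direction = logits.argmax(dim=1)          # 0 = left, 1 = straight, 2 = right (assumed mapping)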

    Deep learning with convolutional neural networks for histopathology image analysis

    In recent years, deep learning based methods and, in particular, convolutional neural networks have been dominating the arena of medical image analysis. This has been made possible both by the advent of new parallel hardware and by the development of efficient algorithms. It is expected that future advances in both of these directions will increase this domination. The application of deep learning methods to medical image analysis has been shown to significantly improve the accuracy and efficiency of diagnosis. In this chapter, we focus on applications of deep learning in microscopy image analysis and digital pathology in particular. We provide an overview of the state-of-the-art methods in this area and exemplify some of the main techniques. Finally, we discuss some open challenges and avenues for future work.

    The role of deep learning in improving healthcare

    Healthcare is transforming through the adoption of information technologies (IT) and digitalization. Machine learning (ML) and artificial intelligence (AI) are two of the IT technologies leading this transformation. In this chapter we focus on Deep Learning (DL), a subfield of ML that relies on deep artificial neural networks to deliver breakthroughs in long-standing AI problems. DL is about working with high-dimensional data (e.g., images, speech recordings, natural language) and learning efficient representations that allow for building successful models. We present a structured overview of DL methods applied to healthcare problems, organized by the suitability of the different techniques to the available modalities of healthcare data. This data-centric perspective reflects the data-driven nature of DL methods and allows side-by-side comparison across different domains in healthcare. Challenges in the broad adoption of DL are commonly related to some of its main drawbacks, particularly its lack of interpretability and transparency. We discuss the drawbacks and limitations of DL technology that specifically come to light in the domain of healthcare. We also address the need for a considerable amount of data and annotations to successfully build these models, which can be a particularly expensive and time-consuming effort. Overall, the chapter offers insights into existing applications of DL to healthcare, their suitability for specific types of data, and their limitations.